The Stream 2018 Ep 59 | Al Jazeera | April 12, 2018, 7:32am-8:01am +03

7:32 am
The daughter of the former Russian spy who was poisoned along with her father last month has made her first public statement since leaving hospital. The UK government accuses Moscow of being behind the suspected nerve agent attack on the former spy and his daughter. Algeria has announced three days of mourning after a plane crash killed 257 people. The military plane went down shortly after taking off from a military airport near the capital, Algiers. The government says most of the dead were soldiers and their families. It is Algeria's worst ever air disaster and the world's deadliest plane crash since 2014. Saudi Arabia says it has intercepted a ballistic missile fired from Yemen over the capital, Riyadh. Houthi rebels claimed responsibility for the attack through a Twitter account associated with the group, saying they were targeting the Saudi defense ministry. "Many of these advancements showed the capabilities of [inaudible]. This is what changed the equation and the rules of the political game, and also
7:33 am
managed to change the strategy of defense. That's what we've noted: these missiles can determine the targets' coordinates and hit them accurately." Facebook CEO Mark Zuckerberg has faced more tough questions on his second day of testimony about the massive breach of users' private data. Facebook has been in the spotlight after the personal information of 87 million users was harvested by the political consultancy Cambridge Analytica. Those are the headlines; I'll be back with more news on Al Jazeera after The Stream. US President Donald Trump has said he will slap new tariffs on imports of steel and aluminium. 5G will mean data transfer ten times faster than 4G. We bring you the stories of the economic world we live in: Counting the Cost, at this time on Al Jazeera. Hi everyone, my name is [inaudible]; I'm an associate director for the gun violence
7:34 am
prevention team of the Center for American Progress, and you're in The Stream. OK, welcome to The Stream, everybody. If you're watching us on YouTube, send us your comments or questions in the chat and we'll try to get them into the show. Today: killer robot technology, and whether it should be banned. We'll look at the debate over letting weapons decide when to pull the trigger. Using artificial intelligence, or AI, weapons of war are being programmed to be smarter and more independent. But should machines be given the ability to decide when and whom to kill? Representatives met this week in Geneva to consider a preemptive ban on autonomous weapons technology that may someday power what activists call killer robots. The world's high-tech militaries, including the US,
7:35 am
Russia, and Israel, are all eagerly pursuing AI applications. But is there a line developers shouldn't cross when creating weapons designed to think for themselves? With us to talk about this: in London, Elke Schwarz. She's a member of the International Committee for Robot Arms Control, or ICRAC; she lectures at the University of Leicester and is the author of a soon-to-be-released book, "Death Machines: The Ethics of Violent Technologies." In Geneva, Paul Scharre. He is senior fellow and director of the technology and national security program at the Center for a New American Security, and also author of a forthcoming book, "Army of None: Autonomous Weapons and the Future of War." And in San Francisco, Alex Salkever. He is a writer and futurist and the co-author of "The Driver in the Driverless Car." Welcome to all of our guests. We could have a book club with just the three of you here. So, we asked our community their thoughts
7:36 am
on LAWS, which is lethal autonomous weapon systems, and what some are calling killer robots. This is what one person tweeted us back, and that's because this is what's in the public imagination: it's a GIF, and it's from "Black Mirror," and it depicts a robot designed to kill. This robot, in fact, has taken a knife and is going to pursue a woman. But Paul, how far from reality is this? What's your definition of a lethal autonomous weapon? The kinds of intelligent robots that are in that "Black Mirror" episode, which is a great episode, are a long way away still; they're still science fiction. It would be possible to build weapons today that hunted military targets. Things like radars that are emitting in the electromagnetic spectrum would be easier to distinguish, particularly if you're flying through the air or out at sea, where there are not a lot of other objects or civilians in the way. I guess I'm just curious
7:37 am
about this phrase, killer robots. When I say it, I don't know whether to cry or to laugh; I have two exactly opposite emotions. Alex, is this term helpful to us in understanding exactly what we're talking about? Elke, I'll get to you in a moment; just take a beat, catch your breath, and let's test this phrase, killer robots. Yeah, I think it's a useful tool against sanitizing what the technology could potentially do, which is kill. So it might be slightly overdrawn, it might appear to be slightly overdrawn, and of course it always invokes the idea, the image, of the Terminator. But I think it's useful to have a catchy analogy in order to bring home to the general public what the technology could potentially be about. So I think it's quite
7:38 am
a useful term. Whether it is technically accurate is a different story. I do think it's useful; I'm just spelling it out so everybody understands for the rest of this conversation: a killer robot, or autonomous weapon system, is a weapon that can make a decision about a kill without a human necessarily being involved in that equation. That weapon can kill by itself. So the controversial technology that is being debated is lethal autonomous weapon systems, and the functions that are most controversial are the critical functions, which means selecting a target and engaging a target; bluntly, finding somebody and killing somebody. And Alex, just to interject: there are going to be situations where humans may be killed because the system is doing its job recognizing an image or a footprint. So for example, as we were talking about, if there's a human in the radar station, they would die if it were struck; or if it is programmed to recognize a Russian-style tank on the field of battle as
7:39 am
a target, there are humans in the tank and they could die. So there are these gradations along the way; it's not just "find a human, identify them, kill them." When you're talking about the gradations... I hear you, Paul, I'll get to you. Alex mentions the gradations; I get what you're saying, but I want to make sure our audience is on the same page. One person sent us an example, and I want to hear what you think about it. TV Gordon says: "When that man in Dallas killed four cops they blew him up with a robot, so this is clearly real and happening now." What he is referencing is something that happened in 2016. This is from the Guardian: "Use of police robot to kill Dallas shooting suspect believed to be first in US history." It was the lethal use of a bomb disposal robot. Is that what we're talking about, or is it slightly different, because the robot itself didn't decide to deliver that bomb? It did not make the kill decision. Paul? It's
7:40 am
a very different thing. I mean, there are sixteen countries today that already have weaponized robots, and a number of non-state groups. So the debate going on internationally about autonomous weapons is really looking forward into the future and asking: OK, what happens when the robots themselves are making the decisions? So it's not people killing other people with robots; it's people launching robots, and then the robots are making these decisions. There is some debate over whether we're talking about robots that are targeting people or military objects, and depending on who you talk to, people have very different views about what's acceptable. I want to share with people, because we're not in science fiction right now, a story that you put out in the Wall Street Journal. You say: meet the new robot army; intelligent machines that could usher in an era of autonomous warfare are already here. I want to give people a couple of examples. These are illustrations, but these are pieces of weaponry that
7:41 am
are being used right now. The Uran-9: what does that do? So the Uran-9 is a Russian ground combat vehicle that is entirely robotic. It is equipped with a machine gun and anti-tank rockets, so it's designed to go toe to toe with other tanks and armored vehicles, and there would be no one at all inside it. Now, what's not clear is who's pulling the trigger. Is there a human remotely deciding, and that is really the essence of the question, or is the robot deciding on its own? We don't know; you can't see from the outside what software it has and how it's thinking. Let's do another one here. This is interesting; this takes us into the oceans. Scroll down a little. This is the Sea Hunter? Yeah, this is a US Navy ship that is totally unmanned, on the surface, that's designed to hunt for enemy submarines. Right now it's unarmed, it has no weapons, but the US
7:42 am
has talked about putting missiles on it in the future. So this is actually out there. Why are we getting illustrations, Paul, and not the real thing? [laughter] Well, I could show you, but then I'd have to kill you. One more: this is a long-range anti-ship missile. Yes, this is the Long Range Anti-Ship Missile. It's sort of on the cutting edge of intelligent missiles today. A human still decides the target, right? A human says, "I'm going to take out that ship with this missile," but it has a lot of autonomy in how it gets to the ship, and it can navigate on its own. So what I understand, and I read your article very carefully, is that if it sees obstacles on its way, it's not going to do a little detour and blow up something else. It's going to make a decision and keep going on to its target; the missile's got autonomy in how it's going to get there. That's right, and the missile has the ability all on its own to avoid
7:43 am
other threats that might be in its way. If it sees a ship in between on the way to its target, that's right, it maneuvers around it to stay on course to its target. If it works as designed, which is a question a couple of people are bringing up: what happens when they don't, and who then takes responsibility for that? I want to play a video comment from Thompson Chengeta. He talks about some of the legal implications behind this, and I'll direct this to you, Elke. Have a listen. "With killer robots, we are likely to see a gap in attributing individual human responsibility for the use of these weapons. This is an affront to the core principle under international human rights law that victims have the right to a remedy. The right to a remedy includes prosecution of perpetrators
7:44 am
who violate international humanitarian legal norms. In this case it is not the law that has to keep up with the technology; it is technology that should be kept to conform with the law." So, Elke, who is responsible if the robot makes a mistake? Yes, that is maybe the million-dollar question, really, and I wholeheartedly agree with Thompson's comment in the video clip. The problem really is that with these technologies you don't exactly know where the decision is made or how the decision is made, who can be accountable and who can be held responsible for it. So it's not as easy as: OK, I'm using a tool and I'm actively pulling the trigger, therefore I am responsible. It's a whole decision mechanism which cannot necessarily be attributed to a single person or
7:45 am
a unit, but rather the decision could be happening through programming, through an operator, through a whole system of participants. So putting your finger on accountability or responsibility in the first place, and then fixing accountability, is really difficult with these systems, and that is obviously a huge problem for existing legal frameworks. I would also add that countermeasures are something that we've seen in military systems forever. They will be deployed against the AI systems as well, potentially to really tragic results. I mean, we already see situations where non-state actors use human shields to hide military targets behind them; you could see the reverse of that used for political means, meaning they make a school bus look like a tank to draw AI fire. Additionally, there's a whole field of computer science now, in AI, called adversarial attacks, where we make things appear to be something that they're not in order to fool the AI.
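The idea Alex is describing can be made concrete with a short sketch. Below is a minimal illustration of the fast gradient sign method, one well-known way of building adversarial examples; the function name, arguments, and epsilon value are illustrative assumptions, not anything from the program or any real system.

    # Sketch of an adversarial perturbation (fast gradient sign method):
    # shift every pixel a tiny step in the direction that most increases
    # the classifier's error, so the image looks unchanged to a person
    # but can be misread by the model. Purely illustrative.
    import numpy as np

    def fgsm_perturb(image, loss_gradient, epsilon=0.01):
        # loss_gradient is d(loss)/d(image) for the true label,
        # as produced by any autodiff framework.
        perturbed = image + epsilon * np.sign(loss_gradient)
        return np.clip(perturbed, 0.0, 1.0)  # keep pixel values valid

The same logic underlies the school-bus example above: a perturbation crafted against a specific model can push an ordinary-looking object across that model's decision boundary.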
7:46 am
So as we strip humans out of the decision loop, the time between, essentially, the decision being made and the impact happening, the trigger being pulled or the bomb going off, goes to nearly zero, and the time to correct these kinds of errors shrinks and becomes negligible. So the problem here, the accountability gap, is a risk; it could happen. But there's actually no principle in the laws of war that says you have to hold an individual person accountable. That's actually not a thing. It's appealing, personally, I think, to many people to say, well, who's responsible for this? But there's nothing in the laws of war that says you have to have that, and there are accidents that happen today with humans. Sure, but there is something really profoundly morally troubling about not having anybody be accountable, specifically when it comes to kill decisions, specifically when it comes to decisions to eliminate a human being or an entire group of human beings. So saying, OK, well, this may happen, this may not happen, it happens in other contexts: that's not really a way to
7:47 am
unpack, or perhaps even address, what is quite key in accountability with systems in which a decision may be preprogrammed. And that actually isn't really what I'm describing with countermeasures. It's like: a different terrorist group decides to fool the AI, and it instead ends up blowing up a school. And you say those things happen today; they do, but with AI it's going to be much more difficult, much more challenging. And with most of what's happening with AI, we make it sound like it's some proprietary magic technology. It's actually all open source software, and the computer necessary to run it is shrinking and shrinking, and AI will be something that every actor can use that is able to access and manipulate data systems. Then here is another problem: the bias in artificial intelligence systems, and this is something that the entire artificial intelligence community is grappling with now. Elke, do you want to just
7:48 am
give us a very quick, vivid example of what you mean by bias? OK, so my prime example: bias is something that we're all familiar with, but if you go into your Google search engine and you type in "CEO," and this is an example Kate Crawford uses when she talks about AI bias, what you will find is usually largely white middle-aged men, with women, such as CEO Barbie, sort of scattered among them. And among the women it's all blonde women, too. I've actually written about algorithmic bias. Yes, so please continue. So the way things are labeled, in order to be able to be processed through algorithms, is by no means clear cut. What goes into the
7:49 am
training data: these are all decisions that are made beforehand, and they may not necessarily represent the entire diversity of the population you're dealing with. And so once that is in the system, it's very difficult to untangle or disentangle what is data that came in biased and what comes out. OK, you're basically saying we could end up with racist weaponry? Yeah, definitely. I hear you there, Alex, but I am glad that you brought that up, because we've got this question specifically on that, from a viewer asking what human biases will be programmed into the robots. She's assuming that they already will be; but of course we're all biased, so that is a fair assumption.
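Elke's point about labels chosen beforehand can be seen in a toy sketch: a system that does nothing more than learn the statistics of a skewed training set will reproduce the skew, and its apparent accuracy will hide it. The group names and numbers below are invented purely for illustration.

    # Toy illustration of training-data bias: if the labeled examples
    # over-represent one group, even a "model" that only learns the
    # base rate reproduces that imbalance.
    from collections import Counter

    training_labels = ["group_a"] * 90 + ["group_b"] * 10  # skewed labels

    counts = Counter(training_labels)
    majority, majority_count = counts.most_common(1)[0]

    # Always predicting the majority class already scores 90% on data
    # drawn the same way; the bias is baked in before any learning runs.
    print(majority, majority_count / len(training_labels))

Untangling that afterwards is exactly the difficulty Elke describes: the skew was introduced at labeling time, before any algorithm ever ran.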
7:50 am
But I want to shift just a little bit here to bring up this tweet from Anthony, who says: "The list of countries who want to ban them is the same as the list of countries that can make them. Be careful what you're cheering for." It may not be exactly accurate, but Paul, his point there I think is well taken, and you being someone who has actually been in the arena of war, you were in the military: what do you make of being on a battlefield when automated weapons are either alongside you, on your side, or on the opposing side? He makes a great point. When you look at the list of countries that have said they support a ban, none of them are leading military developers, and what's driving so many of them, their desire to support a ban, is not humanitarian concerns; it's politics. Now, the NGOs, like ICRAC and others, they're motivated by humanitarian concern, but for many of these countries, they don't have to know autonomous weapons to know that they're against them, because they know they're not the ones building them. And if we look at some of them, a lot of them are not leaders in global human rights. So, internationally, some of what's driving the conversation is politics among nations. I'm just curious; let's bring this down to basics. The fact that you go to war using
7:51 am
a machine, and this is the essence of my point, that's making its own decisions: OK, why is that worse than going to war without that? Because I think when we go to war, we have to think about how to achieve peace, so there should be a reluctance towards violence before you use that violence. But we're already at war, Elke. That's not even realistic. I mean, that's a historian's perspective, knowing that at the end of this war that we're already in there are going to be peace negotiations. But we are in it right now, and we're killing people. Why would you be more upset about how you kill somebody? Because certain technologies, I fear, lower the threshold for the use of violence, so there's more violence. And there's plenty of scholarship that shows that more violence is not likely to lead to a more peaceful context down the road; in fact, there's an escalatory dimension to
7:52 am
that. So as you lower the threshold for the use of violence, and as the current thinking becomes, OK, we can have this technology where we can perhaps engage more risk-free in the application of violence, I think a distortion takes place in the dimensions that are necessary to solve the conflict. OK, there's an audience member who agrees with that; I just want to get them in, this is live on YouTube. Tom says: "Wouldn't killer robots desensitize humans to the already atrocious acts that they would be committing?" So, picking up on your point there. But on the other side, Alex, this person says having killer robots do the fighting would keep soldiers and police officers out of harm's way. Alex? So what I was going to point out is that emotionally I kind of want to agree with what Elke is saying, but if you look historically, as we've
7:53 am
injected more and more technology into warfare, the levels of casualties and violence and the incidence of war have actually gone down. I mean, at least that's according to what Steven Pinker writes about in "The Better Angels of Our Nature." And we don't know enough necessarily about AI to be sure whether it will be worse to have them, or better. I'll give a specific example. If there were a way to codify rules not to kill civilians, or to recognize children very clearly: AI does a much better job of this kind of stuff over time, in the heat of battle. It also doesn't get tired or snappy or angry; it doesn't take drugs; it isn't drunk. It won't make those kinds of human errors. So I'm kind of on the fence both ways. But can a machine act with, you know, genuine humanitarian instinct? Can you code mercy into an algorithm? And on the other side, there would be no sexual violence in the field either. Absolutely; I mean, humans do horrendous things, there's
7:54 am
absolutely no doubt about that. Humans are entirely fallible, and there are terrible things being done, and I wouldn't by any stretch of the imagination try to mitigate that. But we also have to consider the fact that humans also have positive things, good things: their empathy, their compassion, their ways of acting in human relationships, which I think are important not to do away with for the sake of technology. You can codify laws, rules, or ethical guidelines into machinery in some way, in some form or another, but that relies on some sort of quantification of what you think is good and what is bad, and everything that you cannot mathematically quantify and program into a system or code then kind of falls by the wayside. Paul? And I think there are
7:55 am
ways, you know, I do think that there are ways that we can find a balance between both of these concerns: to use the technology in ways that reduce harm to civilians and reduce the risk to soldiers, but also hold on to our humanity. One of the things that people mix up a lot of the time is the value of physical robots in giving soldiers or police officers more standoff from threats, versus autonomous weapons that would make their own decisions. I don't like the term killer robots, and I think it mixes up all of these important differences among the technology. If you could increase the physical distance so people are not in harm's way and don't have to make a decision in the moment, do I shoot right now to defend myself, that could reduce harm in warfare and would be a good thing. And there might be other things we could use AI for, to make factual decisions: is this person holding a rifle, or are they holding a rake? We could do that with AI. But we probably still want to find ways to use this so that we
7:56 am
hold on to our humanity, so that we don't get to a place where humans no longer care about what happens in war. I think that would be terrible. Paul, it sounds like you're saying there is a middle ground here; there is a way to make this work. But let me bring John's tweet in here. He says: "Lethal automated weapon systems are problematic because it's uncertain that their automated intelligence algorithms are capable of meeting the international humanitarian law requirements." And so, Alex, I'll actually give this one to you, because right now we have state actors who don't meet international humanitarian law requirements. What is it about these robots that means they might be different? Is there a way to program them so that they do? In theory you could hold them to meet any requirement. As Elke said, if it can be reduced to mathematics of some sort, a code of conduct, then you can program it to meet that code of conduct, though there will always be edge cases, because the world is very messy. But at the same time, it certainly is easier to encode that type of thing in computer code than it is to figure out how it works in our brains at this point.
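What Alex describes, a code of conduct reduced to explicit, checkable conditions, might look something like the sketch below. The rule set and field names are hypothetical illustrations of the idea, not any real targeting system, and they also illustrate his edge-case point: anything the rules cannot quantify has nowhere to live in the code.

    # Hypothetical sketch of "encoding a code of conduct": an action is
    # permitted only if every explicit, quantifiable rule passes. Mercy,
    # context, and intent have no field here, which is the edge-case
    # problem discussed above. Purely illustrative.
    from dataclasses import dataclass

    @dataclass
    class Contact:
        confirmed_military: bool   # identified as a military object?
        civilians_nearby: bool     # civilians within the effects radius?
        human_authorized: bool     # has a human approved this target?

    def engagement_permitted(c: Contact) -> bool:
        rules = [
            c.confirmed_military,    # distinction
            not c.civilians_nearby,  # precaution
            c.human_authorized,      # human in the loop
        ]
        return all(rules)

Note the design constraint: every rule must reduce to a yes/no quantity, which is why, as Elke put it, whatever resists quantification falls by the wayside.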
7:57 am
But one other point to make around this is that, whether we like it or not, we're going to have to fund a lot of research around this even if we have banned it, because we're going to have to have the capability for countermeasures should someone else use it, or just the ability to deal with it in case it gets out of Pandora's box. I just want a lightning round in the sixty seconds we have left. In ten years' time, will we have killer robots? Someone's going to build them, someone who won't care about international humanitarian law, whether it's terrorists or, you know, rogue actors terrorizing civilians. Someone will build them. OK. Will we have them? Somebody will probably have built them, but I would really wish that we have some sort of robust international legal framework, or some sort of norm, established so that their use
7:58 am
is governed. OK. To wrap up our wrestling with this ethical conversation, Faria Rashid here says she doesn't think there's a willingness by governments to ban the use of AI robots; it would be the equivalent of governments turning in their guns and promoting world peace. Thanks for watching The Stream. You'll find plenty of conversation online. We'll see you next time. Documentaries that open your eyes, at this time on Al Jazeera. News
7:59 am
as it breaks: "This election was never about who was going to win, but about by how much." With detailed coverage of the Syrian civil war: "What is the difference? Whether some people will live until tomorrow; so many innocent people will die." From around the world: "The bats and balls are several years old, but a really good player could end up at a cricket academy and maybe one day play for the national team." A new poll ranks Mexico City as the worst in the world for sexual violence against women on public transport. Many women are attacked while moving in the crowded spaces of the metro and buses, and even at the hands of taxi drivers. The conversation starts with "Do you have
8:00 am
a boyfriend?" to "You're very pretty and young." You feel unsafe, threatened; you think about how to react: what do I do if this gets worse? Now many women use a new service called Laudrive. It's for women customers only, driven by women drivers, with some extra features like a panic button and 24-hour monitoring and training of the drivers. A story 1,400 years in the making. A story of succession and leadership. This is the story of the foundation and the emergence of an empire. The Caliph, episode one.
